Improve the locking per key logic in the CachingService #187
The existing locking implementation in the `CachingService` class is not robust. It is possible for the policy of exclusive execution per key to be violated, because the lock is released without proper memory fences. The details about when and why this can happen can be found in this StackOverflow question, where I asked specifically about the locking code of the `CachingService` class and received an answer from an expert.

Also, the existing spinning logic, using the `Interlocked.CompareExchange` and `Thread.Yield` APIs, is unlikely to be as efficient as a simple `lock` statement. So with this pull request I am proposing to replace the existing `int[] keyLocks` with an `object[] keyLocks`, initialize the array with a `new object()` in each slot, and use the normal and trustworthy `lock`. This should have no observable difference compared with the current behavior, apart from properly enforcing the exclusive execution policy.

Note: The code in this pull request has not been tested, and might have syntax errors.